Obesity is a widespread health issue in many industrialized nations. It is known that obesity is related to cardiovascular disease (high blood pressure, heart attack, stroke), type II diabetes, sleep apnea, metabolic syndrome, fatty liver disease and cancer (CDC on obesity). According to the World Health Organization (WHO), obesity has nearly tripled between 1975 and 2021, a time frame of less than two generations. Obesity can be measured by the BMI and is therefore a quantifiable and relevant health issue. The threshold for calling someone obese is a BMI \(\geq\) 30.
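As a quick worked example of the metric (illustrative values, not from our data):
# BMI = weight / height^2, with weight in kg and height in m
bmi = function(weight_kg, height_m) weight_kg / height_m^2
bmi(95, 1.75) # ~31.0, at or above the obesity threshold of 30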
Our personal interest in obesity comes from the fact that two members of this team have backgrounds in biochemistry and medicine, respectively, and that obesity remains a health issue which everyone knows and can observe in everyday life. So, while the topic is bio-medically relevant, neither the concept of BMI nor the phenomenon of obesity requires deep domain-specific knowledge. In fact, none of us has worked scientifically with obesity before, and thus we saw it as a new and challenging topic to work on.
We are using the NHANES dataset, which covers “the non-institutionalized
civilian resident population of the United States”. It has been published
by the American “National Health and Nutrition Examination Survey”
since the 1960s. The full data thus covers the relevant time frame which
saw the drastic increase in obesity as reported by the WHO. The full
data can be obtained from CDC.gov. We are using a
subset of NHANES which is easily accessible through the R library
NHANES. It comprises a subset of 10,000 rows and covers a
survey period between 2009 and 2012. While not comprehensive, it should
give us enough material to build a model, select different predictors
and reason about its predictions. Our goal is to build an interpretable
model which identifies and quantifies the influence of several physical
and lifestyle-related predictors on body weight, specifically the
BMI.
In order to build an interpretable model we need to be mindful of
non-linear data transformations and high-order interactions, and we also need
to keep variance inflation under control. The task is made challenging
by a large number of missing values. If we simply omit all
NAs using na.omit(), which discards all rows
with any missing value, we reduce the 10,000
observations to fewer than 1% of them.
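We can verify this directly once the data is loaded (a quick sketch, assuming the NHANES package installed below):
library(NHANES)
# fraction of the 10,000 rows that would survive na.omit() across all columns
mean(complete.cases(NHANES)) # we expect this to be far below 1%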
Our strategy outline for approaching this project is as follows:
- Perform an initial variable selection, ruling out collinear candidates (assessed with pairs() plots), keeping our chosen predictors in a dataset nhanes_select.
- Approach the problem of missing data with multivariate imputation by chained equations (MICE), using the mice library by van Buuren and Groothuis-Oudshoorn (2011).
- With the NAs imputed, and with 5 versions of the imputed data, build our first additive model to check parameter estimates, p-values, and significance of regression, as well as testing the LINE assumptions.
if (!require(NHANES)) {
  install.packages("NHANES", quiet = TRUE)
}
## Loading required package: NHANES
library(NHANES, quiet = TRUE)
We have chosen BMI (body mass index, weight/height² in
\([\frac{kg}{m^2}]\)) as our response
variable. In NHANES, this data is reported for participants aged 2 years
or older, so we will focus on those participants for our analysis.
Provided below are all the variables in NHANES, along with our response
variable BMI.
sort(names(NHANES)) # alphabetic order
## [1] "Age" "Age1stBaby" "AgeDecade" "AgeFirstMarij"
## [5] "AgeMonths" "AgeRegMarij" "Alcohol12PlusYr" "AlcoholDay"
## [9] "AlcoholYear" "BMI" "BMI_WHO" "BMICatUnder20yrs"
## [13] "BPDia1" "BPDia2" "BPDia3" "BPDiaAve"
## [17] "BPSys1" "BPSys2" "BPSys3" "BPSysAve"
## [21] "CompHrsDay" "CompHrsDayChild" "DaysMentHlthBad" "DaysPhysHlthBad"
## [25] "Depressed" "Diabetes" "DiabetesAge" "DirectChol"
## [29] "Education" "Gender" "HardDrugs" "HeadCirc"
## [33] "HealthGen" "Height" "HHIncome" "HHIncomeMid"
## [37] "HomeOwn" "HomeRooms" "ID" "Length"
## [41] "LittleInterest" "Marijuana" "MaritalStatus" "nBabies"
## [45] "nPregnancies" "PhysActive" "PhysActiveDays" "Poverty"
## [49] "PregnantNow" "Pulse" "Race1" "Race3"
## [53] "RegularMarij" "SameSex" "SexAge" "SexEver"
## [57] "SexNumPartnLife" "SexNumPartYear" "SexOrientation" "SleepHrsNight"
## [61] "SleepTrouble" "Smoke100" "Smoke100n" "SmokeAge"
## [65] "SmokeNow" "SurveyYr" "Testosterone" "TotChol"
## [69] "TVHrsDay" "TVHrsDayChild" "UrineFlow1" "UrineFlow2"
## [73] "UrineVol1" "UrineVol2" "Weight" "Work"
We will add all omitted variables to a dataframe
df_exclude. The variables we would like to use as
predictors will be kept in a dataframe df_keep. The
following is our reasoning for ruling out or keeping certain variables
as predictors:
Some predictors can be ruled out right away. Our response
variable is BMI, so we should not use body
Weight or Height as predictors, because
BMI is calculated by dividing Weight by
the square of Height.
The next group of predictors seems very closely related either by
name or by logical deduction, for example, age related variables such as
Age, AgeDecade, AgeMonths. Let’s
quickly double-check if they are linearly related:
pairs(subset(NHANES, select = c('Age', 'AgeDecade', 'AgeMonths',
'AgeFirstMarij', 'AgeRegMarij')))
Age, AgeDecade and AgeMonths
are clearly collinear, so we will only keep Age. Likewise,
both variables for Marijuana use appear collinear, so we keep only one,
say AgeRegMarij and we may decide to drop it later if it is
not useful.
to_test = c("BPDia1", "BPDia2", "BPDia3", "BPDiaAve", "BPSys1", "BPSys2", "BPSys3", "BPSysAve" )
pairs(subset(NHANES, select=to_test))
The blood pressure variables fall into two groups: diastolic and
systolic blood pressure readings. We would expect strong
collinearity within each group, which is the case. So, we only keep the
average in each group BPDiaAve and
BPSysAve.
We performed similar pairs() plot assessments of collinearity for the
remaining variable groups. Let's check all variables related to alcohol: we
again performed a pairs() plot to visualize possible collinearity, and this
graph is in the Appendix. Collinearity is not as clear in this case, but
we believe one predictor related to alcohol consumption may be
sufficient. We will keep AlcoholYear.
Let’s now investigate the collinearity of other drug-related
variables: Most of these predictors are categorical, so collinearity
cannot be seen, except for SmokeAge and
AgeRegMarij. The latter makes sense as this drug is usually
consumed via smoking. We can thus use one as a proxy for the other.
(Note: AgeRegMarij appeared in the age-related group above; ultimately we
exclude it as redundant with Marijuana.) Let's keep SmokeNow and
HardDrugs as proxies for drug abuse and its potential
effect on BMI.
Next, let's investigate a few lifestyle variables related to
being physically active, or the opposite thereof, screen time. Since
these variables are categorical, a clear picture of
collinearity is not observable. Let's keep half of these parameters for
now, namely the ones with more finely grained levels,
PhysActiveDays, TVHrsDay,
CompHrsDay.
Now let’s look into some other health related variables, such as
cholesterol and diabetes related predictors: DirectChol and
TotChol appear to be collinear, let’s keep
TotChol. Out of the diabetes related ones, we keep
Diabetes.
Let’s analyze more health related variables, such as those
related to urine volume and flow below: Urine volume and urine flow
appear collinear. Moreover, there might be collinearity between the
first and second urine measurement, respectively. Let’s keep
UrineVol1 for now.
Next we analyze a somewhat heterogeneous group of variables related
to health or mental health. For example, somebody who is depressed might
show little interest in doing things. Again, collinearity is not easy to
spot in categorical variables. Let’s pick LittleInterest as
a mild form of mental health issue which might lead to little physical
activity and obesity, and HealthGen as a general health
rating.
We decide to keep Poverty which is a ratio of family
income to poverty guidelines, and drop HHIncomeMid and
HHIncome, as they both capture similar information to what
the Poverty variable captures. Similarly, we choose to keep
Race1 instead of Race3 as they both capture
similar information, and Race1 has more data compared to
Race3.
Finally, let’s add Poverty,
SleepHrsNight, Gender, Race1,
Education, and MaritalStatus as we believe
they can have an effect on BMI, and we do not suspect
collinearity.
#Setting up the dataframes with the variables we will be excluding and keeping for model building
df_exclude = data.frame(
  predictor = c('Weight', 'Height', 'Age1stBaby', 'AgeDecade', 'AgeMonths',
                'AgeRegMarij', 'Alcohol12PlusYr', 'AlcoholDay', 'Smoke100',
                'SmokeAge', 'Marijuana', 'RegularMarij', 'BPDia1', 'BPDia2',
                'BPDia3', 'BPSys1', 'BPSys2', 'BPSys3', 'PhysActive',
                'TVHrsDayChild', 'CompHrsDayChild', 'DirectChol',
                'DiabetesAge', 'UrineFlow1', 'UrineVol2', 'UrineFlow2',
                'DaysPhysHlthBad', 'DaysMentHlthBad', 'Depressed', 'Race3',
                'nPregnancies'),
  reason_to_omit = c('linear dependence with BMI', 'linear dependence with BMI',
                     'specific by Gender', 'collinear with Age', 'collinear with Age',
                     'redundant with Marijuana', 'more sparse than AlcoholYear',
                     'redundant with AlcoholYear', 'redundant with SmokeNow',
                     'collinear with AgeRegMarij',
                     'redundant with AgeRegMarij, the two might be swapped',
                     'redundant with Marijuana',
                     'collinear with other blood pressure predictors',
                     'collinear with other blood pressure predictors',
                     'collinear with other blood pressure predictors',
                     'collinear with other blood pressure predictors',
                     'collinear with other blood pressure predictors',
                     'collinear with other blood pressure predictors',
                     'redundant with PhysActiveDays', 'redundant with TVHrsDay',
                     'redundant with CompHrsDay', 'collinear with TotChol',
                     'redundant with Diabetes', 'collinear with UrineVol1',
                     'collinear with UrineVol1', 'collinear with UrineVol1',
                     'redundant with HealthGen', 'redundant with HealthGen',
                     'redundant with HealthGen', 'redundant with Race1',
                     'specific by Gender'))
#Note: 'SurveyYr' is not a predictor; we will be filtering data by SurveyYr later, after which we
#will remove SurveyYr from the predictor list.
df_keep = data.frame(predictor = c('SurveyYr', 'Age', 'AlcoholYear', 'Marijuana', 'SmokeNow',
'HardDrugs', 'BPDiaAve', 'BPSysAve', 'PhysActiveDays', 'TVHrsDay',
'CompHrsDay', 'TotChol', 'Diabetes', 'UrineVol1', 'HealthGen',
'LittleInterest', 'Poverty', 'SleepHrsNight', 'Gender',
'Race1', 'Education', 'MaritalStatus' ))
opts <- options(knitr.kable.NA = "")
knitr::kable(list(df_keep[1:11,], df_keep[12:22,]), caption = "Initial Predictors Selected",
col.names = "Predictor", booktabs = TRUE)
Next, let’s build a dataset nhanes_select using just the
above df_keep variables.
Furthermore, the NHANES dataset has data for 2 survey years: 2009_10 and 2011_12. There are certain variables, such as TVHrsDay and CompHrsDay, which are only present within the later period (2011_12). To eliminate this large missing-value problem, we will further filter our dataset down to the more recent survey year, 2011_12.
nhanes_select = subset(NHANES, select =c(df_keep$predictor, "BMI"))
nhanes_select = nhanes_select[nhanes_select$SurveyYr == '2011_12', ] #filtering by survey year
nhanes_select = subset(nhanes_select, select = -c(SurveyYr)) #removing SurveyYr as a column for
#model building
The resulting dataset, after the initial variable selection and
filtering above, consists of 5000 observations (rows) and 22 variables
(columns) including BMI and the chosen predictors.
Convert Categorical Variables into Factor Variables
We will now convert the categorical predictors into factors.
cat_vars = c('Marijuana', 'SmokeNow', 'HardDrugs', 'Diabetes', 'TVHrsDay',
             'CompHrsDay', 'HealthGen', 'LittleInterest', 'Gender', 'Race1',
             'Education', 'MaritalStatus')
nhanes_select[cat_vars] = lapply(nhanes_select[cat_vars], as.factor)
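A quick sanity check that the conversion worked:
sapply(nhanes_select[cat_vars], class) # each column should now report "factor"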
It would be helpful to have a dataset which is devoid of NAs (missing values) before we conduct our regression analysis. First let’s get a quick idea of how many missing values are present in our initial dataset.
Identify Which Variables Have Majority NA Values
library(tidyverse, quiet = TRUE)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.2 ✔ readr 2.1.4
## ✔ forcats 1.0.0 ✔ stringr 1.5.0
## ✔ ggplot2 3.4.2 ✔ tibble 3.2.1
## ✔ lubridate 1.9.2 ✔ tidyr 1.3.0
## ✔ purrr 1.0.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
# Count the NA values in each column
na_counts = colSums(is.na(nhanes_select))
# Calculate the percentage of NA values in each column
total_rows = nrow(nhanes_select)
na_percentage = (na_counts / total_rows) * 100
# Create a dataframe to store the results
na_summary = data.frame(Column = names(na_counts), NA_Count = na_counts, NA_Percentage = na_percentage)
na_summary = na_summary %>%
arrange(desc(NA_Percentage))
# Print the summary
print(na_summary)
## Column NA_Count NA_Percentage
## SmokeNow SmokeNow 3440 68.80
## PhysActiveDays PhysActiveDays 2614 52.28
## Marijuana Marijuana 2557 51.14
## HardDrugs HardDrugs 2118 42.36
## AlcoholYear AlcoholYear 2016 40.32
## LittleInterest LittleInterest 1665 33.30
## Education Education 1416 28.32
## MaritalStatus MaritalStatus 1415 28.30
## HealthGen HealthGen 1202 24.04
## SleepHrsNight SleepHrsNight 1166 23.32
## TotChol TotChol 775 15.50
## BPDiaAve BPDiaAve 719 14.38
## BPSysAve BPSysAve 719 14.38
## UrineVol1 UrineVol1 501 10.02
## Poverty Poverty 325 6.50
## BMI BMI 166 3.32
## TVHrsDay TVHrsDay 141 2.82
## CompHrsDay CompHrsDay 137 2.74
## Diabetes Diabetes 64 1.28
## Age Age 0 0.00
## Gender Gender 0 0.00
## Race1 Race1 0 0.00
The table above is sorted according to NA percentage in descending
order. The top 5 predictors as far as NAs are concerned are:
SmokeNow, PhysActiveDays,
Marijuana, HardDrugs and
AlcoholYear. Half of all predictors have greater than 25%
missing values. If we eliminated all rows with any missing value, we
would be left with only 419 observations, too few to be meaningful. We
cannot simply proceed with this data, as any regression tools we use
would need to drop many observations in order to perform the
statistical calculations. It would also be inappropriate to simply
eliminate these observations, although this was previously the standard
approach. Eliminating this many observations would bring into question
how well our models represent the underlying population; interpretation
of our results would become more difficult, with the suspicion that
selective elimination of observations introduces bias. This data was
also costly to produce, and we prefer not to simply cast it aside. We
therefore decided to perform data imputation for the missing data.
Data imputation involves substituting missing data with estimated values. Although there are simple methods, such as replacing missing values with the mean or median of the variable in question, the most robust method is multiple imputation. Multiple imputation generates multiple complete datasets by replacing each missing entry with a value drawn from a plausible distribution modeled for that entry. The imputation process can use a variety of methods for computing the imputed values, depending upon the distribution of the observed values and their relationship to the other variables in the observation. Once the multiple complete datasets are generated, any analysis (such as linear regression) can be performed on each, and the results of the analyses are pooled into one set of results.
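A minimal sketch of this workflow with the mice package (the formula here is illustrative only; our actual models are fit further below, where we mostly inspect the per-dataset fits individually):
library(mice)
# impute, fit the same model on each completed dataset, then pool via Rubin's rules
imp_demo = mice(nhanes_select, m = 5, seed = 420, printFlag = FALSE)
fits = with(imp_demo, lm(BMI ~ Age + Poverty)) # illustrative formula
summary(pool(fits)) # pooled estimates, standard errors and p-values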
We will perform the multiple imputation process with the
mice package below. More information regarding the
mice package can be read at the book website Flexible Imputation of Missing
Data.
Install the mice Package
if (!require(mice)) {
  install.packages("mice", quiet = TRUE)
}
Here we will perform the imputation. Given the size of the data, this will take a bit of processing time. First we will remove the observations where there is no entry for BMI as there are only 166 such observations, to avoid imputation of our response variable. There are 4834 observations left after this operation.
library(mice, quiet = TRUE)
##
## Attaching package: 'mice'
## The following object is masked from 'package:stats':
##
## filter
## The following objects are masked from 'package:base':
##
## cbind, rbind
# remove the rows which have NAs for BMI
nhanes_imp = nhanes_select[!is.na(nhanes_select$BMI), ]
# perform the multiple imputation (5 datasets)
imp = mice(nhanes_imp, seed = 420, m = 5, print = FALSE)
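It is worth checking which imputation method mice selected for each variable; by default it uses predictive mean matching (pmm) for numeric variables, logistic regression (logreg) for binary factors, and polytomous regression (polyreg) for unordered factors with more levels:
# imputation methods mice chose per variable (an empty string means nothing was imputed)
imp$method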
See Appendix for density plots comparing the imputed and observed values.
Now that imputation has addressed the missing data, we will build a full additive model using the variables we decided to keep earlier, to allow for an initial diagnostic evaluation.
# function to report the overall F-test p-value of a model
p_value_funct = function(modelobject) {
  f = summary(modelobject)$fstatistic
  # note: pf() underflows to '0' for extremely small p-values
  p = pf(f[1], f[2], f[3], lower.tail = FALSE)
  attributes(p) = NULL
  return(p)
}
model_summary = function(model){
model_name = deparse(substitute(model))
fit_summary <- data.frame(model = model_name, adj_r_squared = summary(model)$adj,
p_val = signif(p_value_funct(model), 6))
knitr::kable(fit_summary, col.names = c("Model", "Adj. R-Squared", "P-Value"),
digits = 32, caption = "Model Assessment Summary", align = "lcc")
}
# perform the linear regression with each of the 5 imputed datasets
fit_add <- with(imp, lm(BMI ~ Age + AlcoholYear + Marijuana + SmokeNow +
HardDrugs + BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Gender + Race1 + Education + MaritalStatus))
model_summary(fit_add$analyses[[1]]) #summary of the model with the 1st imputed dataset
| Model | Adj. R-Squared | P-Value |
|---|---|---|
| fit_add$analyses[[1]] | 0.3678 | 0 |
We will next construct a dataframe stacking all 5 of our imputed datasets,
with two added columns: .imp for the imputation number, and .id for the
observation number within that imputation.
imp_df = mice::complete(imp, action = "long")
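A quick look at the structure of the long-format dataframe:
# the long format stacks the 5 completed datasets: .imp gives the imputation
# number (1 to 5), .id the observation id within each imputation
table(imp_df$.imp) # 4834 rows per imputation
head(imp_df[, c(".imp", ".id")])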
When we built the additive model above, a few parameters had large p-values. Let’s check the variance inflation factors for all the predictors in this model, to see if there is any effect of collinearity on the variance of our regression estimates.
library(car, quiet = TRUE)
##
## Attaching package: 'car'
## The following object is masked from 'package:dplyr':
##
## recode
## The following object is masked from 'package:purrr':
##
## some
car::vif(fit_add$analyses[[1]])
## GVIF Df GVIF^(1/(2*Df))
## Age 3.045 1 1.745
## AlcoholYear 1.186 1 1.089
## Marijuana 1.305 1 1.142
## SmokeNow 1.397 1 1.182
## HardDrugs 1.258 1 1.121
## BPDiaAve 1.454 1 1.206
## BPSysAve 1.818 1 1.348
## PhysActiveDays 1.030 1 1.015
## TVHrsDay 1.426 6 1.030
## CompHrsDay 1.491 6 1.034
## TotChol 1.292 1 1.137
## Diabetes 1.167 1 1.080
## UrineVol1 1.050 1 1.025
## HealthGen 1.540 4 1.055
## LittleInterest 1.173 2 1.041
## Poverty 1.578 1 1.256
## SleepHrsNight 1.094 1 1.046
## Gender 1.148 1 1.072
## Race1 1.537 4 1.055
## Education 2.014 4 1.091
## MaritalStatus 2.309 5 1.087
None of the variables appears to have a large (>5) variance inflation factor, which is good to see. (For multi-level factors, the GVIF^(1/(2*Df)) column is the comparable quantity; values below \(\sqrt{5} \approx 2.24\) correspond to the same threshold.)
Now let’s do some diagnostic tests on this model to identify any potential issues.
library(lmtest)
## Loading required package: zoo
##
## Attaching package: 'zoo'
## The following objects are masked from 'package:base':
##
## as.Date, as.Date.numeric
### First, let's define some functions ###
# Function to calculate the LOOCV RMSE
calc_loocv_rmse = function(model) {
sqrt(mean((resid(model) / (1 - hatvalues(model))) ^ 2))
}
# model diagnostics
model_diagnostics = function(fit){
fit_summary <- data.frame(bptest_p = rep(0,5), shapirotest_p = rep(0,5))
for (i in 1:5){
fit_summary$bptest_p[i] = signif(unname(bptest(fit$analyses[[i]])$p.value),6)
fit_summary$shapirotest_p[i] =
signif(shapiro.test(residuals(fit$analyses[[i]]))$p.value,6)
}
knitr::kable(fit_summary, col.names = c("BP Test", "Shapiro Test"), digits = 32)
}
# model assessments
model_assess = function(fit){
fit_summary <- data.frame(adj_r_squared = rep(0,5), loocv_rmse = rep(0,5))
for (i in 1:5){
fit_summary$adj_r_squared[i] = summary(fit$analyses[[i]])$adj
fit_summary$loocv_rmse[i] = calc_loocv_rmse(fit$analyses[[i]])
}
knitr::kable(fit_summary, col.names = c("Adj. R-Squared", "LOOCV-RMSE"))
}
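For reference, the calc_loocv_rmse() function above uses the closed-form leave-one-out identity for linear models, which avoids refitting the model \(n\) times:
\[\text{LOOCV-RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{e_i}{1 - h_{ii}}\right)^2},\]
where \(e_i\) is the \(i\)-th residual and \(h_{ii}\) the \(i\)-th leverage (hat value).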
par(mfrow = c(2,3))
#Fitted versus Residuals Plot for the imputed dataset model
for (i in seq(1,5)) {
title = strwrap(paste("Fitted versus Residuals Plot using dataset ", as.character(i)), width = 30, simplify = FALSE)
plot(fitted(fit_add$analyses[[i]]), resid(fit_add$analyses[[i]]), col = "darkblue", pch = 20,
xlab = "Fitted", ylab = "Residuals", main = title)
abline(h=0,col = "darkorange")
}
The Fitted versus Residuals plot reveals deviation from homoscedasticity (constant variance).
#Normal Q-Q Plot for the 1st imputed dataset model
qqnorm(resid(fit_add$analyses[[1]]), col = "dodgerblue")
qqline(resid(fit_add$analyses[[1]]), col = "darkorange")
The Q-Q-Plot also shows deviations from normality.
Let’s now look at the p-values from the Shapiro-Wilk Test for normality, and the Breusch-Pagan Test for Homoscedasticity.
model_diagnostics(fit_add)
| BP Test | Shapiro Test |
|---|---|
| 0 | 0 |
| 0 | 0 |
| 0 | 0 |
| 0 | 0 |
| 0 | 0 |
The p-values for these tests, using each of the 5 imputed dataset models, are all very low, essentially 0. So we reject the null hypotheses, calling into question both normality and homoscedasticity. However, both of these tests are susceptible to the influence of large sample sizes, so they may be less reliable in this setting.
Because of the findings from the plots above, we will perform a variance stabilizing log transformation on the response variable (BMI), fit the model again and reassess the diagnostics.
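A side benefit of the log response for interpretability: a coefficient \(\beta_j\) then acts multiplicatively on BMI, since
\[\log(\text{BMI}) = \beta_0 + \beta_j x_j + \ldots \quad\Longrightarrow\quad \text{BMI is multiplied by } e^{\beta_j} \approx 1 + \beta_j \text{ per unit increase in } x_j,\]
so a coefficient of 0.01 corresponds to roughly a 1% increase in BMI.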
# perform the linear regression with each of the 5 imputed datasets
# and the log() transform of BMI
fit_add_log <- with(imp, lm(log(BMI) ~ Age + AlcoholYear + Marijuana + SmokeNow +
HardDrugs + BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Gender + Race1 + Education + MaritalStatus))
model_summary(fit_add_log$analyses[[1]]) #summary of the model with the 1st imputed dataset
| Model | Adj. R-Squared | P-Value |
|---|---|---|
| fit_add_log$analyses[[1]] | 0.4155 | 0 |
#Comparing Fitted vs Residuals: Initial Additive model and the Log(BMI) Transformation Model
par(mfrow=c(1,2))
plot(fitted(fit_add$analyses[[1]]), resid(fit_add$analyses[[1]]), col = "darkblue", pch = 20,
xlab = "Fitted", ylab = "Residuals", main = "Fitted vs Residuals - BMI")
abline(h=0,col = "darkorange")
plot(fitted(fit_add_log$analyses[[1]]), resid(fit_add_log$analyses[[1]]), col = "darkblue", pch = 20,
xlab = "Fitted", ylab = "Residuals", main = "Fitted vs Residuals - log(BMI)")
abline(h=0,col = "darkorange")
The fitted-versus-residuals plot for the log(BMI) model looks much better, though still not perfect.
Now let’s look at the Q-Q plots:
#Comparing the Normal Q-Q Plots of the Initial Additive model and the Log(BMI) Transformation Model
par(mfrow=c(1,2))
# no transformation
qqnorm(resid(fit_add$analyses[[1]]), col = "dodgerblue", main = "Normal Q-Q Plot - BMI")
qqline(resid(fit_add$analyses[[1]]), col = "darkorange")
# log transformation
qqnorm(resid(fit_add_log$analyses[[1]]), col = "dodgerblue", main = "Normal Q-Q Plot - log(BMI)")
qqline(resid(fit_add_log$analyses[[1]]), col = "darkorange")
Again, the log transformation of BMI results in a much better-looking Q-Q plot. Moving forward, we will use the log-transformed BMI for our model building.
Now we use different search procedures (backward, forward, and
stepwise) to search for models and select predictors. Notice that our 5
datasets with observed and imputed data are passed to the stepwise
function using with() which in this case returns a
mira object from the mice package.
First we will start from the intercept-only model and perform a stepwise AIC search over the additive terms.
# build the stepwise workflow
scope <- list(upper = ~ Age + AlcoholYear + Marijuana + SmokeNow +
HardDrugs + BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Gender + Race1 + Education + MaritalStatus,
lower = ~ 1)
expr <- expression(f1 <- lm(log(BMI) ~ 1),
f2 <- step(f1, scope = scope, trace = 0))
# perform the stepwise selection with each of the 5 imputed datasets
fit <- with(imp, expr)
# count the votes for variables to keep
formulas <- lapply(fit$analyses, formula)
terms <- lapply(formulas, terms)
votes <- unlist(lapply(terms, labels))
table(votes)
## votes
## Age AlcoholYear BPDiaAve BPSysAve CompHrsDay
## 5 5 5 5 5
## Diabetes Education HardDrugs HealthGen LittleInterest
## 5 1 3 5 5
## Marijuana MaritalStatus PhysActiveDays Poverty Race1
## 1 5 3 4 5
## SleepHrsNight SmokeNow TotChol TVHrsDay UrineVol1
## 5 5 5 5 5
If we use the criterion that a variable must be selected in more than half of the datasets, we end up dropping only Education, Gender, and Marijuana. Let's compare nested models using anova for the variables with fewer than 5 votes.
Anova Predictor Pruning
# remove HardDrugs
model_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Race1 + MaritalStatus))
model_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
HardDrugs + BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Race1 + MaritalStatus))
anova(model_without, model_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 1.162 1 4 4794 0.3417 0.7217
This p-value is not significant, so we fail to reject the null
hypothesis and we can discard HardDrugs. (The anova method for mira
objects pools the model comparison across the imputed datasets; the riv
column reports the relative increase in variance due to the missing data.)
We also apply this same anova nested model approach for possible
predictor pruning to the variables PhysActiveDays and
Poverty. The analyses are available in the appendix. Based
upon the p-values for these two tests, we fail to reject the
null hypothesis for both, and can remove PhysActiveDays,
and Poverty.
Here is the final model of this process which we will call
fit_add_aic
fit_add_aic = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus))
model_summary(fit_add_aic$analyses[[1]])
| Model | Adj. R-Squared | P-Value |
|---|---|---|
| fit_add_aic$analyses[[1]] | 0.4149 | 0 |
Let’s try a forward search using BIC, and see if we get a smaller model:
# build the stepwise workflow, full scope with all predictors
scope <- list(upper = ~ Age + AlcoholYear + Marijuana + SmokeNow +
HardDrugs + BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Gender + Race1 + Education + MaritalStatus,
lower = ~ 1)
expr <- expression(f1 <- lm(log(BMI) ~ 1),
f2 <- step(f1, scope = scope, direction = "forward",
k = log(nrow(imp[["data"]])), trace = 0)) # lowercase k sets the BIC penalty
# perform the stepwise selection with each of the 5 imputed datasets
fit <- with(imp, expr)
# count the votes for variables to keep
formulas <- lapply(fit$analyses, formula)
terms <- lapply(formulas, terms)
votes <- unlist(lapply(terms, labels))
table(votes)
## votes
## Age AlcoholYear BPDiaAve BPSysAve CompHrsDay
## 5 5 5 5 5
## Diabetes Education HardDrugs HealthGen LittleInterest
## 5 1 3 5 5
## Marijuana MaritalStatus PhysActiveDays Poverty Race1
## 1 5 3 4 5
## SleepHrsNight SmokeNow TotChol TVHrsDay UrineVol1
## 5 5 5 5 5
This appears to yield the same votes as the prior method, which results in the same model.
Lastly, let’s try a Stepwise search in both directions using AIC.
# build the stepwise workflow
scope <- list(upper = ~ Age + AlcoholYear + Marijuana + SmokeNow +
HardDrugs + BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Gender + Race1 + Education + MaritalStatus,
lower = ~ 1)
expr <- expression(f1 <- lm(log(BMI) ~ 1),
f2 <- step(f1, scope = scope, direction = "both",
trace = 0))
# perform the stepwise selection with each of the 5 imputed datasets
fit <- with(imp, expr)
# count the votes for variables to keep
formulas <- lapply(fit$analyses, formula)
terms <- lapply(formulas, terms)
votes <- unlist(lapply(terms, labels))
table(votes)
## votes
## Age AlcoholYear BPDiaAve BPSysAve CompHrsDay
## 5 5 5 5 5
## Diabetes Education HardDrugs HealthGen LittleInterest
## 5 1 3 5 5
## Marijuana MaritalStatus PhysActiveDays Poverty Race1
## 1 5 3 4 5
## SleepHrsNight SmokeNow TotChol TVHrsDay UrineVol1
## 5 5 5 5 5
Here again we get the same results. For now, our additive model will
be fit_add_aic.
Before we move on, there are some predictors which are not significant individually as we saw from the summary of the model earlier, so we should check for collinearity again.
car::vif(fit_add_aic$analyses[[1]])
## GVIF Df GVIF^(1/(2*Df))
## Age 2.938 1 1.714
## AlcoholYear 1.098 1 1.048
## SmokeNow 1.349 1 1.161
## BPDiaAve 1.426 1 1.194
## BPSysAve 1.770 1 1.331
## TVHrsDay 1.333 6 1.024
## CompHrsDay 1.298 6 1.022
## TotChol 1.255 1 1.120
## Diabetes 1.160 1 1.077
## UrineVol1 1.024 1 1.012
## HealthGen 1.398 4 1.043
## LittleInterest 1.146 2 1.035
## SleepHrsNight 1.082 1 1.040
## Race1 1.321 4 1.035
## MaritalStatus 2.060 5 1.075
There appear to be no major issues with collinearity in this additive model.
Additionally, we want to check which variables from this additive model are correlated with BMI to further investigate possible interaction terms, and determine if there are additional variables we could consider dropping.
Correlation Plot
library("corrplot")
## corrplot 0.92 loaded
# Subset nhanes_select to the numerical predictor columns plus BMI
nhanes_numerical_subset = nhanes_select[, c('Age', 'AlcoholYear', 'BPDiaAve', 'BPSysAve', 'UrineVol1', 'SleepHrsNight', 'TotChol' ,'BMI')]
# Calculate the correlation matrix for 'nhanes_numerical_subset'
cor_matrix = cor(nhanes_numerical_subset, use = "complete.obs")
# Create the correlation plot for 'BMI' and other numeric variables
corrplot(cor_matrix, type = "upper", tl.cex = 0.8, tl.col = "black", tl.srt = 45)
Based upon the numerical correlation plot above, there seems to be
little if any correlation between BMI and either
UrineVol1 or TotChol.
Because of this finding, it seems reasonable to try removing both
UrineVol1 and TotChol from the model, and to
evaluate this removal with the anova nested model method.
Chi-square Analysis of Categorical Variables
We performed a Chi-square analysis to assess the association of each categorical variable with BMI, in hopes we might find variables that we could consider dropping from our models. The code and results are available in the appendix section of this document. We found that all the p-values from the chi-square tests of the categorical variables are very small, so each of these variables was left in the model.
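As a hypothetical sketch of the kind of test we ran (assuming BMI binned into the standard WHO categories; the actual code is in the appendix):
# bin BMI into the WHO categories, then test association with a categorical predictor
bmi_cat = cut(nhanes_imp$BMI, breaks = c(0, 18.5, 25, 30, Inf),
              labels = c("Underweight", "Normal", "Overweight", "Obese"))
chisq.test(table(nhanes_imp$HealthGen, bmi_cat))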
Anova Predictor Pruning
Now we will try removing UrineVol1 at an anova
significance level of \(\alpha =
0.01\). The anova analysis details are in the appendix.
Since the p-value of the anova test comparing the models with and
without UrineVol1 is greater than 0.01, we fail to reject
the null hypothesis, and can remove the variable.
Now let’s consider removing TotChol at a significance
level \(\alpha = 0.01\). Again, the
anova analysis details are in the appendix.
Likewise, the p-value is greater than 0.01 (though less than 0.05), so for now we keep TotChol in the model.
So our final simple additive model is below, and we
will name it final_add.
final_add = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus))
model_summary(final_add$analyses[[1]])
| Model | Adj. R-Squared | P-Value |
|---|---|---|
| final_add$analyses[[1]] | 0.4136 | 0 |
To determine if interaction terms might improve our model, we will
perform a stepwise AIC search (via the step() function) with Age
interaction terms added to the scope. Age interactions seemed the most
plausible and made the most sense from a domain perspective. It is
understandable, for instance, that alcohol use at a young age versus an
older age might have different combined effects upon BMI.
# build the stepwise workflow using our fit_add_aic
# with interaction terms added as a starting point
scope <- list(upper = ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus + Age:AlcoholYear + Age:SmokeNow + Age:BPDiaAve +
Age:BPSysAve + Age:TVHrsDay + Age:CompHrsDay + Age:TotChol + Age:Diabetes + Age:HealthGen +
Age:LittleInterest + Age:SleepHrsNight + Age:Race1 + Age:MaritalStatus,
lower = ~ 1)
expr <- expression(f1 <- lm(log(BMI) ~ 1),
f2 <- step(f1, scope = scope, trace = 0))
# perform the stepwise selection with each of the 5 imputed datasets
fit <- with(imp, expr)
# count the votes for variables to keep
formulas <- lapply(fit$analyses, formula)
terms <- lapply(formulas, terms)
votes <- unlist(lapply(terms, labels))
table(votes)
## votes
## Age Age:AlcoholYear Age:BPDiaAve Age:BPSysAve
## 5 3 5 5
## Age:Diabetes Age:HealthGen Age:LittleInterest Age:MaritalStatus
## 5 5 2 5
## Age:Race1 Age:SleepHrsNight Age:SmokeNow Age:TotChol
## 5 1 5 1
## Age:TVHrsDay AlcoholYear BPDiaAve BPSysAve
## 4 5 5 5
## CompHrsDay Diabetes HealthGen LittleInterest
## 5 5 5 5
## MaritalStatus Race1 SleepHrsNight SmokeNow
## 5 5 5 5
## TotChol TVHrsDay
## 1 5
We now build a model using all of the above predictors that received more than 3 votes.
fit_int_aic = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve
+ Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus))
model_summary(fit_int_aic$analyses[[1]])
| Model | Adj. R-Squared | P-Value |
|---|---|---|
| fit_int_aic$analyses[[1]] | 0.5005 | 0 |
Anova Predictor Pruning
We conduct an anova test to check if we should keep
Age:TVHrsDay as the individual p-values from the model are
not significant.
The p-value from the anova nested model study is less than 0.05, so
for now, we will keep Age:TVHrsDay. The details from this
anova study are presented in the appendix.
We also notice from the model that the p-values for the
Race1 categories and their interaction terms are not
significant, and the estimates are small. Perhaps we can remove Race1
and the interaction term.
The p-value from the anova nested model study is very low, less than
0.01, so we reject the null hypothesis, and will keep Race1
and the related interaction term. The details from this anova study are
presented in the appendix.
So our final model with the addition of interactions is
final_int, and is given below:
final_int = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve
+ Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus))
model_summary(final_int$analyses[[1]])
| Model | Adj. R-Squared | P-Value |
|---|---|---|
| final_int$analyses[[1]] | 0.5005 | 0 |
Now we will consider if our model will benefit from any data
transformations. The numerical values present in our current model that
are available for possible transformation are
Age, AlcoholYear, BPSysAve, BPDiaAve. We investigated each,
and we found that Age had a fairly uniform distribution,
perhaps as a result of the NHANES population sampling methodologies. We
also found that AlcoholYear did not seem to follow a continuous
distribution, so we did not consider transforming either of these two
numerical variables. The two numerical variables we felt might have
skewed distributions are those related to blood pressure. Let’s graph
each of their distributions. We are using the imputed data, but only one
of the 5 imputed datasets.
BPSysAve Transformation:
par(mfrow=c(1,2))
hist(imp_df[imp_df$.imp == 1, ]$BPSysAve,
main = "Histogram - BPSysAve",
xlab = "BPSysAve")
hist(log(imp_df[imp_df$.imp == 1, ]$BPSysAve),
main = "Histogram - Log(BPSysAve)",
xlab = "Log(BPSysAve)")
We notice that the non-transformed BPSysAve data appears
skewed right, so we apply a log transformation, and the distribution
improves.
BPDiaAve Transformation:
par(mfrow=c(1,2))
hist(imp_df[imp_df$.imp == 1, ]$BPDiaAve,
main = "Histogram - BPDiaAve",
xlab = "BPDiaAve")
hist((imp_df[imp_df$.imp == 1, ]$BPDiaAve)^2,
main = "Histogram - BPDiaAve^2",
xlab = "BPDiaAve^2")
In the case of BPDiaAve, we notice that the
non-transformed data appears skewed left, so we apply a square
transformation, and the distribution improves somewhat; it remains a
bit right skewed, but less skewed overall.
Now we will consider our model with the transformed blood pressure measures (BPDiaAve squared, BPSysAve log-transformed):
final_trns = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
I(BPDiaAve^2) + log1p(BPSysAve) + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:I(BPDiaAve^2) +
Age:log1p(BPSysAve) + Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 +
Age:MaritalStatus))
#Note: we apply log1p to get the natural log of (1 + BPSysAve), as log(BPSysAve) produced
#-Inf for 0 and NaN for negative values of BPSysAve
model_summary(final_trns$analyses[[1]])
| Model | Adj. R-Squared | P-Value |
|---|---|---|
| final_trns$analyses[[1]] | 0.5006 | 0 |
We notice that our adjusted R-squared is essentially unchanged relative to the untransformed blood pressure measures, but this will represent our final transformation model, and we will compare all three models more fully in the next section.
Based on our analysis, these are the final 3 models that we will conduct further outlier and diagnostic assessments on:
Additive Model log(BMI) ~ Age + AlcoholYear + SmokeNow + BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay + TotChol + Diabetes + HealthGen + LittleInterest + SleepHrsNight + Race1 + MaritalStatus
Model with Interactions log(BMI) ~ Age + AlcoholYear + SmokeNow + BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay + Diabetes + HealthGen + LittleInterest + SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve + Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus
Model with Transformations & Interactions log(BMI) ~ Age + AlcoholYear + SmokeNow + I(BPDiaAve^2) + log1p(BPSysAve) + TVHrsDay + CompHrsDay + Diabetes + HealthGen + LittleInterest + SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:I(BPDiaAve^2) + Age:log1p(BPSysAve) + Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus
We arrived at these models using the following steps:
- initial variable screening for collinearity using pairs() plots,
- multiple imputation of the missing values with the mice package,
- a full additive model refined by stepwise search and anova pruning,
- interactions of Age with other predictors, as well as
- transformations of BPDiaAve and BPSysAve.
For each of the 3 models, we would now like to see if there are any influential outliers having a large effect on our regressions. We will conduct this assessment using Cook's distance.
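For reference, Cook's distance for observation \(i\) combines its squared residual with its leverage,
\[D_i = \frac{e_i^2}{p\,\hat{\sigma}^2}\cdot\frac{h_{ii}}{(1 - h_{ii})^2},\]
where \(p\) is the number of model parameters; below we flag observations using the common heuristic \(D_i > 4/n\).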
#finding influential observations for each model
indexs_trns = which(cooks.distance(final_trns$analyses[[1]]) > 4 /
length(cooks.distance(final_trns$analyses[[1]])))
indexs_int = which(cooks.distance(final_int$analyses[[1]]) > 4 /
length(cooks.distance(final_int$analyses[[1]])))
indexs_add = which(cooks.distance(final_add$analyses[[1]]) > 4 /
length(cooks.distance(final_add$analyses[[1]])))
#removing influential observations for each model using the first imputed dataset
imp_df_1 = imp_df[imp_df$.imp == 1, ]
imp_df_1_trns_rm = imp_df_1[-indexs_trns, ]
imp_df_1_int_rm = imp_df_1[-indexs_int, ]
imp_df_1_add_rm = imp_df_1[-indexs_add, ]
Next, we will fit the 3 models after removing the influential observations.
#Final_add Model fit with influential observations removed
final_add = lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus, data = imp_df_1_add_rm)
#Final_int Model fit with influential observations removed
final_int = lm(log(BMI) ~ Age + AlcoholYear + SmokeNow + BPDiaAve + BPSysAve +
TVHrsDay + CompHrsDay + Diabetes + HealthGen +
LittleInterest + SleepHrsNight + Race1 + MaritalStatus +
Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve + Age:TVHrsDay +
Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus,
data = imp_df_1_int_rm)
#Final_trns Model fit with influential observations removed
final_trns = lm(log(BMI) ~ Age + AlcoholYear + SmokeNow + I(BPDiaAve^2) +
log1p(BPSysAve) + TVHrsDay + CompHrsDay + Diabetes +
HealthGen + LittleInterest + SleepHrsNight + Race1 +
MaritalStatus + Age:SmokeNow + Age:I(BPDiaAve^2) +
Age:log1p(BPSysAve) + Age:TVHrsDay + Age:Diabetes +
Age:HealthGen + Age:Race1 + Age:MaritalStatus,
data = imp_df_1_trns_rm)
Additive Model
summary(final_add)
##
## Call:
## lm(formula = log(BMI) ~ Age + AlcoholYear + SmokeNow + BPDiaAve +
## BPSysAve + TVHrsDay + CompHrsDay + TotChol + Diabetes + HealthGen +
## LittleInterest + SleepHrsNight + Race1 + MaritalStatus, data = imp_df_1_add_rm)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.5099 -0.1378 -0.0015 0.1259 0.6040
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.7598562 0.0392566 70.30 < 2e-16 ***
## Age 0.0036486 0.0002120 17.21 < 2e-16 ***
## AlcoholYear -0.0002632 0.0000299 -8.79 < 2e-16 ***
## SmokeNowYes -0.0934411 0.0063034 -14.82 < 2e-16 ***
## BPDiaAve 0.0029244 0.0002190 13.35 < 2e-16 ***
## BPSysAve 0.0010541 0.0002164 4.87 1.1e-06 ***
## TVHrsDay0_to_1_hr -0.0190083 0.0221827 -0.86 0.39155
## TVHrsDay1_hr -0.0251556 0.0217972 -1.15 0.24853
## TVHrsDay2_hr 0.0117856 0.0215313 0.55 0.58415
## TVHrsDay3_hr 0.0366570 0.0218884 1.67 0.09406 .
## TVHrsDay4_hr 0.0352962 0.0224743 1.57 0.11637
## TVHrsDayMore_4_hr 0.0174311 0.0223687 0.78 0.43586
## CompHrsDay0_to_1_hr 0.0331487 0.0081012 4.09 4.4e-05 ***
## CompHrsDay1_hr 0.0775538 0.0087113 8.90 < 2e-16 ***
## CompHrsDay2_hr 0.0854689 0.0100006 8.55 < 2e-16 ***
## CompHrsDay3_hr 0.0968591 0.0119200 8.13 5.7e-16 ***
## CompHrsDay4_hr 0.1305117 0.0165882 7.87 4.5e-15 ***
## CompHrsDayMore_4_hr 0.1518918 0.0140831 10.79 < 2e-16 ***
## TotChol 0.0124496 0.0028728 4.33 1.5e-05 ***
## DiabetesYes 0.0616858 0.0113565 5.43 5.9e-08 ***
## HealthGenVgood 0.0695857 0.0085980 8.09 7.4e-16 ***
## HealthGenGood 0.1551953 0.0087329 17.77 < 2e-16 ***
## HealthGenFair 0.1701583 0.0119855 14.20 < 2e-16 ***
## HealthGenPoor 0.1868422 0.0276326 6.76 1.5e-11 ***
## LittleInterestSeveral 0.0002078 0.0079348 0.03 0.97910
## LittleInterestMost -0.0342888 0.0120251 -2.85 0.00437 **
## SleepHrsNight -0.0095926 0.0022104 -4.34 1.5e-05 ***
## Race1Hispanic -0.0291000 0.0132612 -2.19 0.02826 *
## Race1Mexican 0.0267981 0.0124142 2.16 0.03093 *
## Race1White -0.0324993 0.0091869 -3.54 0.00041 ***
## Race1Other -0.0963347 0.0125066 -7.70 1.6e-14 ***
## MaritalStatusLivePartner -0.0506892 0.0138783 -3.65 0.00026 ***
## MaritalStatusMarried -0.0465541 0.0109101 -4.27 2.0e-05 ***
## MaritalStatusNeverMarried -0.0696721 0.0121550 -5.73 1.1e-08 ***
## MaritalStatusSeparated -0.0219069 0.0226667 -0.97 0.33385
## MaritalStatusWidowed -0.1323453 0.0173123 -7.64 2.5e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.183 on 4543 degrees of freedom
## Multiple R-squared: 0.495, Adjusted R-squared: 0.491
## F-statistic: 127 on 35 and 4543 DF, p-value: <2e-16
Model with Interactions
summary(final_int)
##
## Call:
## lm(formula = log(BMI) ~ Age + AlcoholYear + SmokeNow + BPDiaAve +
## BPSysAve + TVHrsDay + CompHrsDay + Diabetes + HealthGen +
## LittleInterest + SleepHrsNight + Race1 + MaritalStatus +
## Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve + Age:TVHrsDay +
## Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus,
## data = imp_df_1_int_rm)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.4822 -0.1194 -0.0061 0.1131 0.5739
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.12e+00 7.61e-02 27.86 < 2e-16 ***
## Age 2.09e-02 1.61e-03 12.96 < 2e-16 ***
## AlcoholYear -2.48e-04 2.77e-05 -8.96 < 2e-16 ***
## SmokeNowYes -1.30e-01 1.19e-02 -10.88 < 2e-16 ***
## BPDiaAve 1.31e-03 3.42e-04 3.82 0.00014 ***
## BPSysAve 9.73e-03 4.34e-04 22.41 < 2e-16 ***
## TVHrsDay0_to_1_hr -1.50e-01 4.27e-02 -3.52 0.00043 ***
## TVHrsDay1_hr -1.94e-01 4.18e-02 -4.64 3.6e-06 ***
## TVHrsDay2_hr -1.21e-01 4.14e-02 -2.92 0.00350 **
## TVHrsDay3_hr -1.09e-01 4.23e-02 -2.57 0.01021 *
## TVHrsDay4_hr -1.17e-01 4.41e-02 -2.64 0.00827 **
## TVHrsDayMore_4_hr -1.09e-01 4.40e-02 -2.48 0.01301 *
## CompHrsDay0_to_1_hr 1.35e-02 7.59e-03 1.77 0.07643 .
## CompHrsDay1_hr 4.38e-02 8.18e-03 5.36 9.0e-08 ***
## CompHrsDay2_hr 4.68e-02 9.45e-03 4.95 7.8e-07 ***
## CompHrsDay3_hr 5.68e-02 1.12e-02 5.05 4.6e-07 ***
## CompHrsDay4_hr 8.60e-02 1.54e-02 5.59 2.4e-08 ***
## CompHrsDayMore_4_hr 8.14e-02 1.33e-02 6.10 1.2e-09 ***
## DiabetesYes 1.95e-01 4.26e-02 4.57 5.0e-06 ***
## HealthGenVgood 8.43e-02 1.43e-02 5.91 3.7e-09 ***
## HealthGenGood 1.53e-01 1.49e-02 10.22 < 2e-16 ***
## HealthGenFair 2.88e-01 2.49e-02 11.57 < 2e-16 ***
## HealthGenPoor 2.95e-01 8.23e-02 3.59 0.00033 ***
## LittleInterestSeveral 3.79e-03 7.33e-03 0.52 0.60519
## LittleInterestMost -4.37e-02 1.12e-02 -3.92 8.9e-05 ***
## SleepHrsNight -5.86e-03 2.05e-03 -2.86 0.00424 **
## Race1Hispanic -2.12e-02 2.36e-02 -0.90 0.36861
## Race1Mexican 4.10e-02 2.16e-02 1.90 0.05734 .
## Race1White 1.60e-02 1.68e-02 0.95 0.34134
## Race1Other -4.23e-02 2.22e-02 -1.91 0.05662 .
## MaritalStatusLivePartner -1.13e-01 3.94e-02 -2.86 0.00424 **
## MaritalStatusMarried -1.39e-02 3.70e-02 -0.38 0.70700
## MaritalStatusNeverMarried -1.70e-01 3.70e-02 -4.59 4.5e-06 ***
## MaritalStatusSeparated -3.20e-02 6.67e-02 -0.48 0.63135
## MaritalStatusWidowed -2.01e-02 9.09e-02 -0.22 0.82455
## Age:SmokeNowYes 1.12e-03 2.81e-04 3.99 6.8e-05 ***
## Age:BPDiaAve -2.76e-05 8.45e-06 -3.26 0.00111 **
## Age:BPSysAve -1.66e-04 8.07e-06 -20.64 < 2e-16 ***
## Age:TVHrsDay0_to_1_hr 3.27e-03 1.05e-03 3.13 0.00177 **
## Age:TVHrsDay1_hr 4.55e-03 1.03e-03 4.44 9.4e-06 ***
## Age:TVHrsDay2_hr 3.54e-03 1.01e-03 3.50 0.00047 ***
## Age:TVHrsDay3_hr 3.90e-03 1.02e-03 3.83 0.00013 ***
## Age:TVHrsDay4_hr 3.96e-03 1.04e-03 3.80 0.00015 ***
## Age:TVHrsDayMore_4_hr 3.70e-03 1.04e-03 3.57 0.00036 ***
## Age:DiabetesYes -1.80e-03 7.25e-04 -2.48 0.01300 *
## Age:HealthGenVgood -6.61e-04 3.59e-04 -1.84 0.06546 .
## Age:HealthGenGood -5.87e-04 3.64e-04 -1.61 0.10680
## Age:HealthGenFair -3.17e-03 5.21e-04 -6.09 1.2e-09 ***
## Age:HealthGenPoor -2.68e-03 1.51e-03 -1.78 0.07587 .
## Age:Race1Hispanic 4.94e-04 6.03e-04 0.82 0.41283
## Age:Race1Mexican -5.08e-04 6.02e-04 -0.84 0.39855
## Age:Race1White -1.03e-03 4.10e-04 -2.52 0.01172 *
## Age:Race1Other -1.50e-03 5.62e-04 -2.67 0.00767 **
## Age:MaritalStatusLivePartner 1.94e-03 8.33e-04 2.32 0.02015 *
## Age:MaritalStatusMarried -6.38e-04 7.14e-04 -0.89 0.37208
## Age:MaritalStatusNeverMarried 4.22e-03 7.64e-04 5.52 3.6e-08 ***
## Age:MaritalStatusSeparated -5.01e-04 1.42e-03 -0.35 0.72507
## Age:MaritalStatusWidowed -3.36e-04 1.37e-03 -0.25 0.80606
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.169 on 4492 degrees of freedom
## Multiple R-squared: 0.574, Adjusted R-squared: 0.569
## F-statistic: 106 on 57 and 4492 DF, p-value: <2e-16
Model with Transformation & Interactions
summary(final_trns)
##
## Call:
## lm(formula = log(BMI) ~ Age + AlcoholYear + SmokeNow + I(BPDiaAve^2) +
## log1p(BPSysAve) + TVHrsDay + CompHrsDay + Diabetes + HealthGen +
## LittleInterest + SleepHrsNight + Race1 + MaritalStatus +
## Age:SmokeNow + Age:I(BPDiaAve^2) + Age:log1p(BPSysAve) +
## Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 +
## Age:MaritalStatus, data = imp_df_1_trns_rm)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.4643 -0.1194 -0.0063 0.1129 0.5733
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.55e+00 2.36e-01 -6.57 5.6e-11 ***
## Age 8.48e-02 4.74e-03 17.87 < 2e-16 ***
## AlcoholYear -2.45e-04 2.77e-05 -8.83 < 2e-16 ***
## SmokeNowYes -1.29e-01 1.19e-02 -10.77 < 2e-16 ***
## I(BPDiaAve^2) 1.87e-05 3.45e-06 5.40 7.0e-08 ***
## log1p(BPSysAve) 1.01e+00 4.93e-02 20.54 < 2e-16 ***
## TVHrsDay0_to_1_hr -1.53e-01 4.26e-02 -3.60 0.00033 ***
## TVHrsDay1_hr -1.97e-01 4.18e-02 -4.71 2.6e-06 ***
## TVHrsDay2_hr -1.25e-01 4.14e-02 -3.02 0.00254 **
## TVHrsDay3_hr -1.07e-01 4.23e-02 -2.54 0.01119 *
## TVHrsDay4_hr -1.16e-01 4.41e-02 -2.64 0.00842 **
## TVHrsDayMore_4_hr -1.10e-01 4.39e-02 -2.50 0.01237 *
## CompHrsDay0_to_1_hr 1.18e-02 7.58e-03 1.55 0.12078
## CompHrsDay1_hr 4.39e-02 8.17e-03 5.37 8.1e-08 ***
## CompHrsDay2_hr 4.59e-02 9.45e-03 4.86 1.2e-06 ***
## CompHrsDay3_hr 5.63e-02 1.12e-02 5.02 5.4e-07 ***
## CompHrsDay4_hr 8.25e-02 1.54e-02 5.36 8.6e-08 ***
## CompHrsDayMore_4_hr 8.13e-02 1.33e-02 6.10 1.2e-09 ***
## DiabetesYes 2.14e-01 4.27e-02 5.03 5.2e-07 ***
## HealthGenVgood 8.32e-02 1.42e-02 5.84 5.5e-09 ***
## HealthGenGood 1.48e-01 1.49e-02 9.94 < 2e-16 ***
## HealthGenFair 2.83e-01 2.47e-02 11.46 < 2e-16 ***
## HealthGenPoor 2.81e-01 8.26e-02 3.41 0.00066 ***
## LittleInterestSeveral 2.52e-03 7.32e-03 0.34 0.73122
## LittleInterestMost -4.62e-02 1.12e-02 -4.14 3.5e-05 ***
## SleepHrsNight -6.09e-03 2.05e-03 -2.97 0.00298 **
## Race1Hispanic -2.44e-02 2.35e-02 -1.04 0.29996
## Race1Mexican 4.30e-02 2.14e-02 2.00 0.04505 *
## Race1White 1.02e-02 1.68e-02 0.61 0.54441
## Race1Other -3.97e-02 2.21e-02 -1.80 0.07218 .
## MaritalStatusLivePartner -1.21e-01 3.89e-02 -3.10 0.00193 **
## MaritalStatusMarried -2.43e-02 3.65e-02 -0.66 0.50625
## MaritalStatusNeverMarried -1.78e-01 3.65e-02 -4.88 1.1e-06 ***
## MaritalStatusSeparated -4.36e-02 6.64e-02 -0.66 0.51146
## MaritalStatusWidowed 4.23e-02 9.28e-02 0.46 0.64871
## Age:SmokeNowYes 1.09e-03 2.81e-04 3.88 0.00011 ***
## Age:I(BPDiaAve^2) -3.37e-07 7.68e-08 -4.39 1.2e-05 ***
## Age:log1p(BPSysAve) -1.76e-02 9.75e-04 -18.06 < 2e-16 ***
## Age:TVHrsDay0_to_1_hr 3.33e-03 1.05e-03 3.19 0.00144 **
## Age:TVHrsDay1_hr 4.57e-03 1.02e-03 4.46 8.4e-06 ***
## Age:TVHrsDay2_hr 3.61e-03 1.01e-03 3.57 0.00036 ***
## Age:TVHrsDay3_hr 3.84e-03 1.02e-03 3.77 0.00016 ***
## Age:TVHrsDay4_hr 3.95e-03 1.04e-03 3.79 0.00015 ***
## Age:TVHrsDayMore_4_hr 3.67e-03 1.04e-03 3.54 0.00040 ***
## Age:DiabetesYes -2.10e-03 7.26e-04 -2.89 0.00390 **
## Age:HealthGenVgood -6.15e-04 3.58e-04 -1.72 0.08634 .
## Age:HealthGenGood -5.01e-04 3.64e-04 -1.38 0.16885
## Age:HealthGenFair -3.07e-03 5.18e-04 -5.92 3.5e-09 ***
## Age:HealthGenPoor -2.29e-03 1.52e-03 -1.51 0.13127
## Age:Race1Hispanic 5.30e-04 6.03e-04 0.88 0.37887
## Age:Race1Mexican -5.52e-04 5.99e-04 -0.92 0.35712
## Age:Race1White -9.38e-04 4.09e-04 -2.29 0.02189 *
## Age:Race1Other -1.63e-03 5.59e-04 -2.91 0.00359 **
## Age:MaritalStatusLivePartner 2.11e-03 8.25e-04 2.55 0.01073 *
## Age:MaritalStatusMarried -4.29e-04 7.05e-04 -0.61 0.54281
## Age:MaritalStatusNeverMarried 4.44e-03 7.56e-04 5.87 4.8e-09 ***
## Age:MaritalStatusSeparated -3.01e-04 1.42e-03 -0.21 0.83183
## Age:MaritalStatusWidowed -1.20e-03 1.39e-03 -0.86 0.38795
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.169 on 4497 degrees of freedom
## Multiple R-squared: 0.575, Adjusted R-squared: 0.569
## F-statistic: 107 on 57 and 4497 DF, p-value: <2e-16
Now that we have removed the influential observations, we would like to conduct different diagnostic tests and see how the 3 models perform.
Diagnostic Tests
#Functions to calculate various diagnostics
get_adj_r2 = function(model) {
summary(model)$adj.r.squared
}
calc_loocv_rmse = function(model) {
sqrt(mean((resid(model) / (1 - hatvalues(model))) ^ 2))
}
get_shapiro = function(model) {
shapiro.test(resid(model))$p.value
}
get_bp = function(model) {
unname(bptest(model)$p.value)
}
# model comparison results
row_names = c("Additive Model", "Model with Interactions", "Model with Transformations")
col_names = c("Model", "BP Test", "Shapiro-Wilk Test", "Adjusted R2")
bp_results = c(get_bp(final_add), get_bp(final_int), get_bp(final_trns))
shapiro_results = c(get_shapiro(final_add), get_shapiro(final_int), get_shapiro(final_trns))
r2_results = c(get_adj_r2(final_add), get_adj_r2(final_int), get_adj_r2(final_trns))
results_table = cbind(row_names, signif(bp_results,6), signif(shapiro_results,6), round(r2_results,6))
knitr::kable(results_table, col.names = col_names, caption = "Model Comparison")
| Model | BP Test | Shapiro-Wilk Test | Adjusted R2 |
|---|---|---|---|
| Additive Model | 1.62383e-36 | 5.56921e-13 | 0.490983 |
| Model with Interactions | 2.60423e-36 | 5.26166e-09 | 0.568574 |
| Model with Transformations | 1.28736e-34 | 5.47327e-09 | 0.569475 |
As we see from the table above, we have been able to build 3 models which can explain more than 49% of the variability in the dataset, according to adjusted \(R^2\). As far as adjusted \(R^2\) is concerned, the interaction model and the transformed model perform better than the additive model, explaining roughly 8 percentage points more of the variability. All models have issues with heteroscedasticity and violations of the normality assumption according to the Breusch-Pagan and Shapiro-Wilk tests, respectively. However, both tests are sensitive to large sample sizes, where even small deviations yield tiny p-values, so they may be less reliable in this setting.
Next, we'll split the data into training and testing sets to see how these 3 models compare when we evaluate the LOOCV-RMSE on the training data and the average prediction error on the held-out (unseen) test data.
set.seed(420)
#Splitting the data for the Final_add Model
add_idx = sample(nrow(imp_df_1_add_rm), size = trunc(0.80 * nrow(imp_df_1_add_rm)))
add_trn_data = imp_df_1_add_rm[add_idx, ]
add_tst_data = imp_df_1_add_rm[-add_idx, ]
#Splitting the data for the Final_int Model
int_idx = sample(nrow(imp_df_1_int_rm), size = trunc(0.80 * nrow(imp_df_1_int_rm)))
int_trn_data = imp_df_1_int_rm[int_idx, ]
int_tst_data = imp_df_1_int_rm[-int_idx, ]
#Splitting the data for the Final_trns Model
trns_idx = sample(nrow(imp_df_1_trns_rm), size = trunc(0.80 * nrow(imp_df_1_trns_rm)))
trns_trn_data = imp_df_1_trns_rm[trns_idx, ]
trns_tst_data = imp_df_1_trns_rm[-trns_idx, ]
We will re-fit the models using the training data.
#Final_add Model fit with influential observations removed, and using training data
final_add2 = lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus, data = add_trn_data)
#Final_int Model fit with influential observations removed, and using training data
final_int2 = lm(log(BMI) ~ Age + AlcoholYear + SmokeNow + BPDiaAve + BPSysAve +
TVHrsDay + CompHrsDay + Diabetes + HealthGen +
LittleInterest + SleepHrsNight + Race1 + MaritalStatus +
Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve + Age:TVHrsDay +
Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus,
data = int_trn_data)
#Final_trns Model fit with influential observations removed, and using training data
final_trns2 = lm(log(BMI) ~ Age + AlcoholYear + SmokeNow + I(BPDiaAve^2) +
log1p(BPSysAve) + TVHrsDay + CompHrsDay + Diabetes +
HealthGen + LittleInterest + SleepHrsNight + Race1 +
MaritalStatus + Age:SmokeNow + Age:I(BPDiaAve^2) +
Age:log1p(BPSysAve) + Age:TVHrsDay + Age:Diabetes +
Age:HealthGen + Age:Race1 + Age:MaritalStatus,
data = trns_trn_data)
# model comparison results
row_names = c("Additive Model", "Model with Interactions", "Model with Transformations")
col_names = c("Model", "LOOCV_RMSE", "Avg Pct Error")
# Back-transform the log-scale predictions to BMI units, then compute the
# average percent error; note the errors are taken relative to the predicted
# values rather than the observed BMI
predicted_add2 = exp(predict(final_add2, newdata = add_tst_data))
avg_pct_error_add2 = mean(abs(predicted_add2 - add_tst_data$BMI)
                          / predicted_add2) * 100
predicted_int2 = exp(predict(final_int2, newdata = int_tst_data))
avg_pct_error_int2 = mean(abs(predicted_int2 - int_tst_data$BMI)
                          / predicted_int2) * 100
predicted_trns2 = exp(predict(final_trns2, newdata = trns_tst_data))
avg_pct_error_trns2 = mean(abs(predicted_trns2 - trns_tst_data$BMI)
                           / predicted_trns2) * 100
rmse_results2 = c(calc_loocv_rmse(final_add2), calc_loocv_rmse(final_int2),
calc_loocv_rmse(final_trns2))
avg_pct_error2 = c(avg_pct_error_add2, avg_pct_error_int2, avg_pct_error_trns2)
results_table2 = cbind(row_names, round(rmse_results2, 6), round(avg_pct_error2, 6))
knitr::kable(results_table2, col.names = col_names, caption = "Model Comparison")
| Model | LOOCV_RMSE | Avg Pct Error |
|---|---|---|
| Additive Model | 0.184747 | 14.933648 |
| Model with Interactions | 0.170929 | 13.839178 |
| Model with Transformations | 0.170808 | 13.758884 |
We would like to select the model with the lowest LOOCV-RMSE, indicating that it neither overfits nor underfits, as well as the lowest average percent error, indicating that it predicts unseen data well. Based on the results above, the model with transformations has the lowest values for both metrics, but the differences from the other two models, especially the model with interactions, are small.
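As a complementary check (a sketch, not part of the original tables), the held-out test RMSE in BMI units can be computed directly from the predictions above:
# Sketch: test-set RMSE on the back-transformed BMI scale, comparable across models
c(add = sqrt(mean((predicted_add2 - add_tst_data$BMI)^2)),
  int = sqrt(mean((predicted_int2 - int_tst_data$BMI)^2)),
  trns = sqrt(mean((predicted_trns2 - trns_tst_data$BMI)^2)))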
Let’s also look at the plots for the predicted versus the actual values for these 3 models and add the line \(y = x\).
par(mfrow=c(1,3))
plot(add_tst_data$BMI,
predicted_add2,
col = "dodgerblue",
pch = 20,
main = "Additive Model",
xlab = "Actual",
ylab = "Predicted"
)
abline(a=0, b=1, col = "darkorange", lwd = 3)
plot(int_tst_data$BMI,
predicted_int2,
col = "dodgerblue",
pch = 20,
main = "Model with Interactions",
xlab = "Actual",
ylab = "Predicted"
)
abline(a=0, b=1, col = "darkorange", lwd = 3)
plot(trns_tst_data$BMI,
predicted_trns2,
col = "dodgerblue",
pch = 20,
main = "Model with Transformations",
xlab = "Actual",
ylab = "Predicted"
)
abline(a=0, b=1, col = "darkorange", lwd = 3)
We see that the model with transformations does best at predicting BMI; however, it differs little from the model with interactions. The additive model does not predict BMI as well as the other two.
This was a very educational and productive project. We found motivation in investigating a significant health problem: obesity. We utilized one of the most widely used data sources for healthcare studies, the National Health and Nutrition Examination Survey (NHANES), which we described in the introduction. We were surprised by the large amount of missing data, but benefited from learning about data imputation and using the mice package to complete our dataset. Imputed data has limitations, and we did not have enough time to learn more about methods for minimizing the associated risks. We cannot be certain that the imputed data led to a model that accurately represents the true relationships between the predictors and BMI in the underlying population. However, we felt that the reduction in bias due to missing data that imputation provides outweighed those risks.
Using the procedures of variable selection, model building, and model assessment learned in this class, we found three useful models. Based on our findings above, from both a prediction and an explainability point of view we would prefer the interaction model: the transformations do not seem to help much, and the interaction model is a little simpler than the model with transformations.
Our model could be useful for predictions in the clinical setting, since the data needed to run it are readily available from a brief interview with the patient. However, it must be noted that the model has an average percent error of 13.84%. This incomplete accounting of the variability reflects, in part, the complexity of what determines a person’s BMI. We found many factors that significantly, but only minimally, affected a subject’s BMI. A person’s BMI is clearly multifactorial, and many of those factors, such as genetics and diet, are not included in this particular dataset. It is also not obvious in which scenarios predicting BMI would be useful, as BMI can simply be measured. Perhaps if a patient’s predicted BMI is greater than their current BMI, we might be concerned about a risk of increasing obesity and could encourage appropriate lifestyle changes.
If we instead look at the model from an explainability perspective, we see some significant factors affecting BMI. Predictors that our model associates with an increase in BMI include age, diabetes, computer time, and the interaction between age and “never married” status. Predictors associated with a decrease in BMI include alcohol use, smoking, and having little interest in doing things most days; these three are clearly linked to other poor quality-of-life states. In the end, the modifiable predictors from our model that may improve BMI are limiting computer time (sitting), marriage (perhaps a proxy for meaningful connections with other people), and improving general health.
In conclusion, we found this project highly rewarding and the skills learned in this class very powerful. The NHANES data is useful for the investigation of obesity, and utilizing a more complete NHANES dataset would likely be even more insightful.
pairs() Plots for Collinearity Assessment
Alcohol related variables:
to_test = c("Alcohol12PlusYr", "AlcoholDay", "AlcoholYear")
pairs(subset(NHANES, select = to_test))
Smoking and Drug related variables:
to_test = c("SmokeNow", "Smoke100", "SmokeAge", "Marijuana",
"RegularMarij", "AgeRegMarij", "HardDrugs")
pairs(subset(NHANES, select = to_test))
Lifestyle related variables:
to_test = c("PhysActive", "PhysActiveDays", "TVHrsDay",
"CompHrsDay", "TVHrsDayChild", "CompHrsDayChild")
pairs(subset(NHANES, select=to_test))
Cholesterol related variables:
to_test = c("DirectChol", "TotChol", "Diabetes", "DiabetesAge")
pairs(subset(NHANES, select = to_test))
Urine related variables:
to_test = c("UrineVol1", "UrineFlow1", "UrineVol2", "UrineFlow2")
pairs(subset(NHANES, select = to_test))
Mental health related variables:
to_test = c("HealthGen", "DaysPhysHlthBad", "DaysMentHlthBad",
"LittleInterest", "Depressed" )
pairs(subset(NHANES, select = to_test))
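Note that pairs() plots factors by their integer codes, so panels between two categorical variables are hard to read; for such pairs a contingency table is often clearer, e.g. (a sketch, not part of the original output):
# Sketch: cross-tabulate two categorical mental health variables
table(NHANES$LittleInterest, NHANES$Depressed)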
# Compare the imputed variables (red) and observed (blue)
densityplot(imp)
# summary(imp)
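Besides densityplot(), mice provides further diagnostics for mids objects; for example, a strip plot of observed versus imputed values per imputation (a sketch, not run in the original analysis; we assume AlcoholYear was among the imputed variables):
# Sketch: observed (blue) vs. imputed (red) AlcoholYear across the imputations
stripplot(imp, AlcoholYear ~ .imp, pch = 20, cex = 0.5)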
if (!require(vcd)) {
install.packages("vcd", quiet = TRUE)
}
## Loading required package: vcd
## Loading required package: grid
library(vcd)
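With vcd loaded, a mosaic plot gives a cleaner view of the association between two categorical predictors than pairs(); one illustrative sketch (not part of the original output):
# Sketch: mosaic plot shaded by Pearson residuals of the independence model
mosaic(~ Diabetes + HealthGen, data = nhanes_select, shade = TRUE)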
# Subset 'nhanes_select' to the categorical predictors plus the BMI response
nhanes_categorical_subset = nhanes_select[, c('SmokeNow', 'TVHrsDay', 'CompHrsDay', 'Diabetes',
                                              'HealthGen', 'LittleInterest', 'Race1',
                                              'MaritalStatus', 'BMI')]
# Perform a chi-squared test of each categorical variable against BMI
# (BMI itself is numeric, so the is.factor() check skips it)
for (var in names(nhanes_categorical_subset)) {
  if (is.factor(nhanes_categorical_subset[[var]])) {
    chi_result = chisq.test(nhanes_categorical_subset[[var]], nhanes_categorical_subset$BMI)
    print(paste("Variable:", var))
    print(chi_result)
  }
}
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: SmokeNow"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 550, df = 249, p-value <2e-16
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: TVHrsDay"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 4251, df = 2100, p-value <2e-16
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: CompHrsDay"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 4472, df = 2100, p-value <2e-16
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: Diabetes"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 1125, df = 350, p-value <2e-16
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: HealthGen"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 2901, df = 1284, p-value <2e-16
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: LittleInterest"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 1270, df = 612, p-value <2e-16
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: Race1"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 2344, df = 1400, p-value <2e-16
## Warning in chisq.test(nhanes_categorical_subset[[var]],
## nhanes_categorical_subset$BMI): Chi-squared approximation may be incorrect
## [1] "Variable: MaritalStatus"
##
## Pearson's Chi-squared test
##
## data: nhanes_categorical_subset[[var]] and nhanes_categorical_subset$BMI
## X-squared = 2971, df = 1545, p-value <2e-16
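The repeated warnings arise because BMI is continuous, so the chi-squared contingency tables are extremely sparse. A more natural check (a sketch, not part of the original analysis) treats BMI as the response in a one-way rank test such as Kruskal-Wallis:
# Sketch: Kruskal-Wallis test of BMI across the levels of each factor
for (var in names(nhanes_categorical_subset)) {
  if (is.factor(nhanes_categorical_subset[[var]])) {
    kw = kruskal.test(nhanes_categorical_subset$BMI, nhanes_categorical_subset[[var]])
    print(paste("Variable:", var, "- p-value:", signif(kw$p.value, 3)))
  }
}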
Additive Model ANOVA Studies:
# remove PhysActiveDays
model_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Race1 + MaritalStatus))
model_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Race1 + MaritalStatus))
anova(model_without, model_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 0.5063 1 4 4795 0.516 2.867
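For models fit with with(imp, ...), anova() pools the model comparison across the imputations using the D1 multi-parameter Wald statistic; the same test can be requested explicitly (a sketch) via mice::D1():
# Sketch: explicit pooled Wald test, equivalent to anova() on mira objects
D1(model_with, model_without)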
# remove Poverty
model_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus))
model_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Race1 + MaritalStatus))
anova(model_without, model_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 2.363 1 4 4796 0.199 0.8313
# remove HardDrugs
model_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Race1 + MaritalStatus))
model_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
HardDrugs + BPDiaAve + BPSysAve + PhysActiveDays + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest + Poverty +
SleepHrsNight + Race1 + MaritalStatus))
anova(model_without, model_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 1.162 1 4 4794 0.3417 0.7217
fit_add_aic_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + UrineVol1 + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus))
# removing UrineVol1
fit_add_aic_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus))
anova(fit_add_aic_without, fit_add_aic_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 7.289 1 4 4797 0.0541 0.2203
fit_add_aic_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
TotChol + Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus))
# removing TotChol
fit_add_aic_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus))
anova(fit_add_aic_without, fit_add_aic_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 8.481 1 4 4798 0.04358 0.2853
Interaction Model ANOVA Studies:
fit_int_aic_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve
+ Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus))
# removing Age:TVHrsDay
fit_int_aic_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve
+ Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus))
anova(fit_int_aic_without, fit_int_aic_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 2.172 6 814 4776 0.04367 0.1529
fit_int_aic_with = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + Race1 + MaritalStatus + Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve
+ Age:TVHrsDay + Age:Diabetes + Age:HealthGen + Age:Race1 + Age:MaritalStatus))
# removing Race1, its interaction with Age, and Age:TVHrsDay
fit_int_aic_without = with(imp, lm(log(BMI) ~ Age + AlcoholYear + SmokeNow +
BPDiaAve + BPSysAve + TVHrsDay + CompHrsDay +
Diabetes + HealthGen + LittleInterest +
SleepHrsNight + MaritalStatus + Age:SmokeNow + Age:BPDiaAve + Age:BPSysAve
+ Age:Diabetes + Age:HealthGen + Age:MaritalStatus))
anova(fit_int_aic_without, fit_int_aic_with)
## test statistic df1 df2 dfcom p.value riv
## 2 ~~ 1 4.669 14 1222 4776 2.32e-08 0.2088